130 research outputs found

    Timescales of outlet-glacier flow with negligible basal friction: Theory, observations and modeling

    Get PDF
    The timescales of the flow and retreat of Greenland's and Antarctica's outlet glaciers, and of their potential instabilities, are arguably the largest uncertainty in future sea-level projections. Here we derive a scaling relation that allows the comparison of the timescales of observed complex ice flow fields with geometric similarity. The scaling relation is derived under the assumption of fast, laterally confined, geometrically similar outlet-glacier flow over a slippery bed, i.e., with negligible basal friction. According to the relation, the time scaling of the outlet flow is determined by the product of the inverse of (1) the fourth power of the width-to-length ratio of its confinement, (2) the third power of the confinement depth and (3) the temperature-dependent ice softness. For the outflow at the grounding line of streams with negligible basal friction, this means that the volume flux is proportional to the ice softness and the bed depth, but goes with the fourth power of the gradient of the bed and with the fifth power of the width of the stream. We show that the theoretically derived scaling relation is supported by the observed velocity scaling of outlet glaciers across Greenland as well as by idealized numerical simulations of marine ice-sheet instabilities (MISIs) as found in Antarctica. Assuming that changes in the ice-flow velocity due to ice-dynamic imbalance are proportional to the equilibrium velocity, we combine the scaling relation with a statistical analysis of the topography of 13 MISI-prone Antarctic outlets. Under these assumptions, the timescales of the response to a potential destabilization are fastest for Thwaites Glacier in West Antarctica and Mellor, Ninnis and Cook Glaciers in East Antarctica: between 16 and 67 times faster than for Pine Island Glacier. While the applicability of our results is limited by several strong assumptions, the utilization and potential further development of the presented scaling approach may help to constrain timescale estimates of outlet-glacier flow, complementing the commonly exploited and computationally far more expensive approach of numerical modeling.
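    Restating these proportionalities compactly (the symbols below are ours, introduced for illustration; the abstract names only the quantities): with W and L the width and length of the confinement, D its depth, A the temperature-dependent ice softness and ∇b the bed gradient, the stated scaling reads

        τ ∝ [ (W/L)^4 · D^3 · A ]^(−1)        (timescale of confined outlet flow)
        Q ∝ A · D · (∇b)^4 · W^5              (grounding-line volume flux)

    so, for example, halving the width-to-length ratio of a confinement slows the flow's timescale by a factor of 2^4 = 16, all else being equal.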

    Similitude of ice dynamics against scaling of geometry and physical parameters

    Get PDF
    The concept of similitude is commonly employed in the fields of fluid dynamics and engineering but rarely used in cryospheric research. Here we apply this method to the problem of ice flow to examine the dynamic similitude of isothermal ice sheets in the shallow-shelf approximation against the scaling of their geometry and physical parameters. Carrying out a dimensional analysis of the stress balance, we obtain dimensionless numbers that characterize the flow. Requiring that these numbers remain the same under scaling, we obtain conditions that relate the geometric scaling factors, the parameters for ice softness, surface mass balance and basal friction, as well as the intrinsic ice-sheet response time, to each other. We demonstrate that these scaling laws are the same for both the (two-dimensional) flow-line case and the three-dimensional case. The theoretically predicted ice-sheet scaling behavior agrees with results from numerical simulations that we conduct in flow-line and three-dimensional conceptual setups. We further investigate analytically the implications of geometric scaling of ice sheets for their response time. With this study we provide a framework which, under several assumptions, allows for a fundamental comparison of the ice-dynamic behavior across different scales. It proves to be useful in the design of conceptual numerical model setups and could also be helpful for designing laboratory glacier experiments. The concept might also be applied to real-world systems, e.g., to examine the response times of glaciers, ice streams or ice sheets to climatic perturbations.
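    Schematically, the similitude argument works as follows (a generic illustration with placeholder exponents; the abstract does not state the paper's actual dimensionless numbers). Suppose dimensional analysis of the stress balance yields a dimensionless number of power-law form

        Π = L^a · H^b · A^c

    with L and H the horizontal and vertical geometric scales and A the ice softness. Requiring Π to stay fixed under the rescaling L → λL, H → μH forces the softness to co-scale as

        A → λ^(−a/c) · μ^(−b/c) · A,

    and analogous conditions on the surface-mass-balance and basal-friction parameters, and on the response time, follow from the remaining dimensionless numbers.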

    Efficient Hardware Implementation of Constant Time Sampling for HQC

    Full text link
    HQC is one of the code-based finalists in the last round of the NIST post-quantum cryptography standardization process. In this process, security and implementation efficiency are key metrics for the selection of the candidates. A critical compute kernel with respect to efficient hardware implementations and security in HQC is the sampling method used to derive random numbers. Due to its security criticality, an updated sampling algorithm was recently presented to increase its robustness against side-channel attacks. In this paper, we pursue a cross-layer approach to optimize this new sampling algorithm to enable an efficient hardware implementation without compromising the original algorithmic security and side-channel-attack robustness. We compare our cross-layer-based implementation to a direct hardware implementation of the original algorithm and to optimized implementations of the previous sampler version. All implementations are evaluated on a Xilinx Artix-7 FPGA. Our results show that our approach reduces the latency by a factor of 24 compared to the original algorithm and by a factor of 28 compared to the previously used sampler, with significantly fewer resources.
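    For context, the sampling task in HQC is to draw a fixed-weight random vector, i.e., w distinct positions out of n. The Python sketch below shows one standard way to do this (a Fisher–Yates-style variant); it is a functional model for illustration only, makes no constant-time claims (an interpreter cannot provide them), and is not the paper's optimized hardware design:

        import secrets

        def sample_fixed_weight(n: int, w: int) -> list[int]:
            """Return w distinct positions in [0, n), i.e. the support
            of a random weight-w binary vector of length n."""
            support = []
            for i in range(n - w, n):
                # Draw a candidate position in [0, i].
                pos = secrets.randbelow(i + 1)
                # On collision, fall back to i itself; every previously
                # stored position is < i, so i is always still free.
                support.append(i if pos in support else pos)
            return support

        # Example: the support of a weight-3 vector of length 16.
        print(sorted(sample_fixed_weight(16, 3)))

    The `pos in support` collision check is a linear scan here; a constant-time design would instead need, e.g., a fixed-length comparison over all w slots regardless of the data.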

    Stabilizing the West Antarctic Ice Sheet by surface mass deposition

    Get PDF
    There is evidence that a self-sustaining ice discharge from the West Antarctic Ice Sheet (WAIS) has started, potentially leading to its disintegration. The associated sea-level rise of more than 3 m would pose a serious challenge to highly populated areas including metropolises such as Calcutta, Shanghai, New York City, and Tokyo. Here, we show that the WAIS may be stabilized through mass deposition in coastal regions around Pine Island and Thwaites glaciers. In our numerical simulations, a minimum of 7400 Gt of additional snowfall stabilizes the flow if applied over a short period of 10 years onto the region (−2 mm yr⁻¹ sea-level equivalent). Mass deposition at a lower rate increases the intervention time and the required total amount of snow. We find that the precise conditions of such an operation are crucial, and potential benefits need to be weighed against environmental hazards, future risks, and enormous technical challenges.
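    The quoted figures are mutually consistent, using the standard conversion of roughly 362 Gt of ice per millimetre of global mean sea level (a conversion factor not stated in the abstract):

        7400 Gt / 10 yr = 740 Gt yr⁻¹ ≈ 740 / 362 mm yr⁻¹ ≈ 2 mm yr⁻¹ of sea-level equivalent,

    i.e., the stated −2 mm yr⁻¹ (negative because the mass is taken out of the ocean and deposited onto the ice sheet).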

    Stabilizing effect of mélange buttressing on the marine ice-cliff instability of the West Antarctic Ice Sheet

    Get PDF
    Owing to global warming, and particularly high regional ocean warming, both Thwaites and Pine Island Glaciers in the Amundsen region of the Antarctic Ice Sheet could lose their buttressing ice shelves over time. We analyse the possible consequences using the Parallel Ice Sheet Model (PISM), applying a simple cliff-calving parameterization and an ice-mélange-buttressing model. We find that the instantaneous loss of ice-shelf buttressing, due to enforced ice-shelf melting, initiates grounding-line retreat and triggers marine ice sheet instability (MISI). As a consequence, the grounding line progresses into the interior of the West Antarctic Ice Sheet and leads to a sea-level contribution of 0.6 m within 100 years. By subjecting the exposed ice cliffs to cliff calving using our simplified parameterization, we also analyse marine ice cliff instability (MICI). In our simulations it can double or even triple the sea-level contribution (i.e., up to 1.2 to 1.8 m within the same period), depending on the only loosely constrained parameter that determines the maximum cliff-calving rate. The speed of MICI depends on this upper bound of the calving rate, which is set by the ice mélange buttressing the glacier. However, stabilization of MICI may occur for geometric reasons: because the embayment geometry changes as MICI advances into the interior of the ice sheet, the upper bound on calving rates is reduced and the progress of MICI is slowed down. Although we cannot claim that our simulations yield relevant quantitative estimates of the effect of ice-mélange buttressing on MICI, the mechanism has the potential to stop the instability. Further research is needed to evaluate its role in the past and future evolution of the Antarctic Ice Sheet.

    Search for New Physics in rare decays at LHCb

    Full text link
    Rare heavy-flavour decays provide stringent tests of the Standard Model of particle physics and allow searches for possible New Physics scenarios. The LHCb experiment at CERN is the ideal place for these searches, as it has recorded the world's largest sample of beauty mesons. The status of the rare-decay analyses with 1 fb⁻¹ of pp collisions at √s = 7 TeV collected by the LHCb experiment in 2011 is reviewed. The world's most precise measurement of the angular structure of B⁰ → K*⁰μ⁺μ⁻ decays is discussed, as well as the isospin asymmetry measurement in B → K^(*)μ⁺μ⁻ decays. The most stringent upper exclusion limit on the branching fraction of B_s⁰ → μ⁺μ⁻ decays is shown, as well as searches for lepton-number- and lepton-flavour-violating processes. (6 pages; proceedings of an invited talk at the 4th Workshop on Theory, Phenomenology and Experiments in Heavy Flavour Physics, Capri, Italy, 11-13 June 2012.)

    Phylogenomic analysis of natural products biosynthetic gene clusters allows discovery of arseno-organic metabolites in model streptomycetes

    Get PDF
    We are indebted to Marnix Medema, Paul Straight and Sean Rovito for useful discussions and critical reading of the manuscript, as well as to Alicia Chagolla and Yolanda Rodriguez of the MS Service of Unidad Irapuato, Cinvestav, and Araceli Fernandez for technical support in high-performance computing. This work was funded by Conacyt Mexico (grants No. 179290 and 177568) and FINNOVA Mexico (grant No. 214716) to FBG. PCM was funded by a Conacyt scholarship (No. 28830) and a Cinvestav postdoctoral fellowship. JF and JFK acknowledge funding from the College of Physical Sciences, University of Aberdeen, UK. Peer reviewed. Publisher PDF.

    The classification of schools as an instrument of school governance and local profile formation. Attendant circumstances, post-war adjustment problems and current consequences of the classification of the vocational school system since the 1930s

    Full text link
    The differentiation between vocational schools (Berufsschulen), training colleges (Berufsfachschulen), and technical colleges (Fachschulen) goes back to an edict decreed by the Reich Ministry for Science, Education and National Culture (Reichsministerium für Wissenschaft, Erziehung und Volksbildung) in 1937. This edict, its origins and its long-term structural impact on the designation of vocational schools are examined and placed within a broader framework of development, on the basis of documents from the DFG research project "Data Handbook on the History of German Education: Vol. V: The German Vocational School System, 1815-1945". Special emphasis is placed upon the relation between the classification which evolved during the 1930s, the increase in functions served by the vocational schools, and their interconnection with the system of degrees and entitlements of the general schools. (DIPF/Orig.)

    On Sparse Hitting Sets: From Fair Vertex Cover to Highway Dimension

    Get PDF
    We consider the Sparse Hitting Set (Sparse-HS) problem, where we are given a set system (V,ℱ,ℬ) with two families ℱ,ℬ of subsets of the universe V. The task is to find a hitting set for ℱ that minimizes the maximum number of elements in any of the sets of ℬ. This generalizes several problems that have been studied in the literature. Our focus is on determining the complexity of some of these special cases of Sparse-HS with respect to the sparseness k, which is the optimum number of hitting set elements in any set of ℬ (i.e., the value of the objective function). For the Sparse Vertex Cover (Sparse-VC) problem, the universe is given by the vertex set V of a graph, and ℱ is its edge set. We prove NP-hardness for sparseness k ≥ 2 and polynomial time solvability for k = 1. We also provide a polynomial-time 2-approximation algorithm for any k. A special case of Sparse-VC is Fair Vertex Cover (Fair-VC), where the family ℬ is given by vertex neighbourhoods. For this problem it was open whether it is FPT (or even XP) parameterized by the sparseness k. We answer this question in the negative, by proving NP-hardness for constant k. We also provide a polynomial-time (2-1/k)-approximation algorithm for Fair-VC, which is better than any approximation algorithm possible for Sparse-VC or the Vertex Cover problem (under the Unique Games Conjecture). We then switch to a different set of problems derived from Sparse-HS related to the highway dimension, which is a graph parameter modelling transportation networks. In recent years a growing literature has shown interesting algorithms for graphs of low highway dimension. To exploit the structure of such graphs, most of them compute solutions to the r-Shortest Path Cover (r-SPC) problem, where r > 0, ℱ contains all shortest paths of length between r and 2r, and ℬ contains all balls of radius 2r. It is known that there is an XP algorithm that computes solutions to r-SPC of sparseness at most h if the input graph has highway dimension h. However it was not known whether a corresponding FPT algorithm exists as well. We prove that r-SPC and also the related r-Highway Dimension (r-HD) problem, which can be used to formally define the highway dimension of a graph, are both W[1]-hard. Furthermore, by the result of Abraham et al. [ICALP 2011] there is a polynomial-time O(log k)-approximation algorithm for r-HD, but for r-SPC such an algorithm is not known. We prove that r-SPC admits a polynomial-time O(log n)-approximation algorithm.
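    To make the Sparse-HS objective concrete, here is a small brute-force Python sketch (ours, purely illustrative; it runs in exponential time, whereas the paper concerns hardness and polynomial-time approximation):

        from itertools import combinations

        def sparse_hitting_set(V, F, B):
            """Brute-force Sparse-HS: find H ⊆ V that hits every set in F,
            minimizing the sparseness k = max over S in B of |H ∩ S|."""
            best, best_k = None, None
            for size in range(len(V) + 1):
                for H in map(set, combinations(V, size)):
                    if all(H & set(S) for S in F):  # H hits every set of F
                        k = max((len(H & set(S)) for S in B), default=0)
                        if best_k is None or k < best_k:
                            best, best_k = H, k
            return best, best_k

        # Sparse-VC instance: V is the vertex set of a 4-cycle, F its edge set.
        # Taking B to be the closed vertex neighbourhoods (one plausible reading
        # of "vertex neighbourhoods") gives a Fair-VC-style instance.
        V = [0, 1, 2, 3]
        F = [(0, 1), (1, 2), (2, 3), (3, 0)]
        B = [{3, 0, 1}, {0, 1, 2}, {1, 2, 3}, {2, 3, 0}]
        print(sparse_hitting_set(V, F, B))  # -> ({0, 2}, 2)

    In this instance every vertex cover of the 4-cycle has sparseness 2, which the brute force confirms.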

    Scalable high-precision trimming of photonic resonances by polymer exposure to energetic beams

    Get PDF
    Photonic integrated circuits (PICs) have seen an explosion of interest, through to commercialization, in the past decade. Most PICs rely on sharp resonances to modulate, steer, and multiplex signals. However, the spectral characteristics of high-quality resonances are highly sensitive to small variations in fabrication and material constants, which limits their applicability. Active tuning mechanisms are commonly employed to compensate for such deviations, consuming energy and occupying valuable chip real estate. Readily employable, accurate, and highly scalable mechanisms to tailor the modal properties of photonic integrated circuits are therefore urgently required. Here, we present an elegant and powerful solution that achieves this in a scalable manner during the semiconductor fabrication process, using existing lithography tools: exploiting the volume shrinkage exhibited by certain polymers to permanently modulate the waveguide’s effective index. This technique enables broadband and lossless tuning with immediate applicability in wide-ranging applications in optical computing, telecommunications, and free-space optics.
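    As background for why a small effective-index change trims a resonance, consider the standard ring-resonator relation (textbook photonics, not taken from the abstract): a resonance satisfying n_eff · L = m · λ shifts under an index perturbation Δn_eff by

        Δλ ≈ (Δn_eff / n_g) · λ,

    where n_g is the group index. For example, Δn_eff = 10⁻³ at λ = 1550 nm with n_g ≈ 4 moves the resonance by about 0.4 nm, far more than the linewidth of a high-Q resonance.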